Forget-me-net: Overcoming catastrophic forgetting in back-propagation neural networks

Authors

  • Abdallah El Ali
  • Loes Bazen
  • Iris Groen
  • Elisa Hermanides
  • Wouter Kool
  • David Neville
  • Kendall Rattner
  • Jaap Murre
Abstract

Various methods to overcome the catastrophic interference effect in backpropagation networks are directly compared on a simple learning task. Interleaved learning delivered the best results: in a backpropagation network the pattern “McClelland” was retained after learning the pattern “soup”. Neither the implementation of a sharpening function, nor adjustment of the activation function improved retention. These results indicate that catastrophic interference can be overcome by interleaved learning.
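The effect described in the abstract is easy to reproduce in miniature. The sketch below is not the authors' original network or patterns; it is a minimal backpropagation network (assumed architecture: 4 inputs, 8 sigmoid hidden units, 2 outputs) trained on two arbitrary pattern pairs standing in for "McClelland" and "soup". Presenting the patterns sequentially degrades the first one, while interleaving their presentations retains both.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(presentations, lr=0.5, hidden=8, seed=0):
    """Run one backprop gradient step per presented (input, target) pair."""
    rng = np.random.default_rng(seed)
    x_dim = len(presentations[0][0])
    y_dim = len(presentations[0][1])
    W1 = rng.normal(0.0, 0.5, (x_dim, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, y_dim))
    for x, y in presentations:
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        h = sigmoid(x @ W1)
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1.0 - out)   # squared-error output delta
        d_h = (d_out @ W2.T) * h * (1.0 - h)    # backpropagated hidden delta
        W2 -= lr * np.outer(h, d_out)
        W1 -= lr * np.outer(x, d_h)
    return lambda x: sigmoid(sigmoid(np.asarray(x, float) @ W1) @ W2)

# Two arbitrary pattern pairs standing in for "McClelland" and "soup".
A = ([1, 0, 0, 1], [1, 0])
B = ([0, 1, 1, 0], [0, 1])

def mse(predict, pattern):
    x, y = pattern
    return float(np.mean((predict(x) - np.asarray(y, float)) ** 2))

sequential  = [A] * 2000 + [B] * 2000   # learn A fully, then only B
interleaved = [A, B] * 2000             # alternate A and B throughout

seq = train(sequential)
mix = train(interleaved)

print("error on A after sequential training: ", round(mse(seq, A), 4))
print("error on A after interleaved training:", round(mse(mix, A), 4))
```

With the sequential schedule, the later B-only phase overwrites the shared weights that encoded A, so the error on A ends up well above the interleaved case, in which both patterns are retained.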


Similar references

Evolving Neural Networks That Suffer Minimal Catastrophic Forgetting

Catastrophic forgetting is a well-known failing of many neural network systems whereby training on new patterns causes them to forget previously learned patterns. Humans have evolved mechanisms to minimize this problem, and in this paper we present our preliminary attempts to use simulated evolution to generate neural networks that suffer significantly less from catastrophic forgetting than tra...


Overcoming Catastrophic Interference by Conceptors

Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided back-prop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophi...


Catastrophic forgetting in connectionist networks.

All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Plausible models of human cognition should therefore exhibit similar patterns of gradual forgetting of old information as new information is acquired. Only rarely does new learning in natural cognitive systems completely disrupt or erase previously learned information; that is, natural c...


Less-forgetting Learning in Deep Neural Networks

A catastrophic forgetting problem makes deep neural networks forget the previously learned information, when learning data collected in new environments, such as by different sensors or in different light conditions. This paper presents a new method for alleviating the catastrophic forgetting problem. Unlike previous research, our method does not use any information from the source domain. Surp...


Sequential Learning in Distributed Neural Networks without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It

In sequential learning tasks, artificial distributed neural networks forget catastrophically; that is, newly learned information most often erases previously learned information. This major weakness is not only cognitively implausible, as humans forget gradually, but disastrous for most practical applications. An efficient solution to catastrophic forgetting has been recently proposed for backpropaga...




Journal title:

Volume   Issue

Pages  -

Publication date: 2008